In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project RUBRIC. Please save regularly.
By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
I. Exploratory Data Analysis
II. Rank Based Recommendations
III. User-User Based Collaborative Filtering
IV. Content Based Recommendations (EXTRA - NOT REQUIRED)
V. Matrix Factorization
VI. Extras & Concluding
At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
import pandas as pd
# set max display options for analysis
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 5)
import numpy as np
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import plotly.express as px
import plotly.io as pio
import project_tests as t
import pickle
%matplotlib inline
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df.head()
| | article_id | title | email |
|---|---|---|---|
| 0 | 1430.0 | using pixiedust for fast, flexible, and easier... | ef5f11f77ba020cd36e1105a00ab868bbdbf7fe7 |
| 1 | 1314.0 | healthcare python streaming application demo | 083cbdfa93c8444beaa4c5f5e0f5f9198e4f9e0b |
| 2 | 1429.0 | use deep learning for image classification | b96a4f2e92d8572034b1e9b28f9ac673765cd074 |
| 3 | 1338.0 | ml optimization using cognitive assistant | 06485706b34a5c9bf2a0ecdac41daf7e7654ceb7 |
| 4 | 1276.0 | deploy your python model as a restful api | f01220c46fc92c6e6b161b1849de11faacd7ccb2 |
# Show df_content to get an idea of the data
df_content.head()
| | doc_body | doc_description | doc_full_name | doc_status | article_id |
|---|---|---|---|---|---|
| 0 | Skip navigation Sign in SearchLoading...\r\n\r... | Detect bad readings in real time using Python ... | Detect Malfunctioning IoT Sensors with Streami... | Live | 0 |
| 1 | No Free Hunch Navigation * kaggle.com\r\n\r\n ... | See the forest, see the trees. Here lies the c... | Communicating data science: A guide to present... | Live | 1 |
| 2 | ☰ * Login\r\n * Sign Up\r\n\r\n * Learning Pat... | Here’s this week’s news in Data Science and Bi... | This Week in Data Science (April 18, 2017) | Live | 2 |
| 3 | DATALAYER: HIGH THROUGHPUT, LOW LATENCY AT SCA... | Learn how distributed DBs solve the problem of... | DataLayer Conference: Boost the performance of... | Live | 3 |
| 4 | Skip navigation Sign in SearchLoading...\r\n\r... | This video demonstrates the power of IBM DataS... | Analyze NY Restaurant data using Spark in DSX | Live | 4 |
Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
1. What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
df_user_article_count = df.groupby('email').count()['article_id'].sort_values(ascending=False).to_frame()
# rename the article_id column to interaction_count
df_user_article_count.rename(columns={'article_id':'interaction_count'}, inplace=True)
df_user_article_count.describe()
| | interaction_count |
|---|---|
| count | 5148.000000 |
| mean | 8.930847 |
| std | 16.802267 |
| min | 1.000000 |
| 25% | 1.000000 |
| 50% | 3.000000 |
| 75% | 9.000000 |
| max | 364.000000 |
print(f'The average number of user-article interactions is {df_user_article_count.mean()[0]}.')
print(f'50% of individuals interacted with {df_user_article_count.quantile(0.5)[0]} articles or fewer.')
The average number of user-article interactions is 8.930846930846931.
50% of individuals interacted with 3.0 articles or fewer.
# plot the distribution of how many articles a user interacts with in the dataset (plotly)
fig = px.histogram(df_user_article_count, x='interaction_count', nbins=100, title='Distribution of Total User-Article-Interactions')
fig.update_yaxes(title_text='Number of users per Interaction Count')
fig.update_xaxes(title_text='User-Article-Interactions')
fig.show()
print(f"It is clear that most users have very few interactions with articles. In fact, {df_user_article_count[df_user_article_count['interaction_count'] == 1].shape[0]} out of {df.email.nunique()} users have only 1 interaction with an article.")
It is clear that most users have very few interactions with articles. In fact, 1416 out of 5148 users have only 1 interaction with an article.
# Fill in the median and maximum number of user_article interactions below
# median_val = # 50% of individuals interact with ____ number of articles or fewer.
# max_views_by_user = # The maximum number of user-article interactions by any 1 user is ______.
median_val = df_user_article_count.quantile(0.5)[0]
max_views_by_user = df_user_article_count.max()[0]
print(f'50% of individuals interacted with {median_val} articles or fewer.')
print(f'The maximum number of user-article interactions by any 1 user is {max_views_by_user}.')
50% of individuals interacted with 3.0 articles or fewer.
The maximum number of user-article interactions by any 1 user is 364.
2. Explore and remove duplicate articles from the df_content dataframe.
df_content.head()
| | doc_body | doc_description | doc_full_name | doc_status | article_id |
|---|---|---|---|---|---|
| 0 | Skip navigation Sign in SearchLoading...\r\n\r... | Detect bad readings in real time using Python ... | Detect Malfunctioning IoT Sensors with Streami... | Live | 0 |
| 1 | No Free Hunch Navigation * kaggle.com\r\n\r\n ... | See the forest, see the trees. Here lies the c... | Communicating data science: A guide to present... | Live | 1 |
| 2 | ☰ * Login\r\n * Sign Up\r\n\r\n * Learning Pat... | Here’s this week’s news in Data Science and Bi... | This Week in Data Science (April 18, 2017) | Live | 2 |
| 3 | DATALAYER: HIGH THROUGHPUT, LOW LATENCY AT SCA... | Learn how distributed DBs solve the problem of... | DataLayer Conference: Boost the performance of... | Live | 3 |
| 4 | Skip navigation Sign in SearchLoading...\r\n\r... | This video demonstrates the power of IBM DataS... | Analyze NY Restaurant data using Spark in DSX | Live | 4 |
# Find and explore duplicate articles in df_content
df_content[df_content.duplicated(subset=['article_id'], keep=False)].sort_values(by='article_id')
| | doc_body | doc_description | doc_full_name | doc_status | article_id |
|---|---|---|---|---|---|
| 50 | Follow Sign in / Sign up Home About Insight Da... | Community Detection at Scale | Graph-based machine learning | Live | 50 |
| 365 | Follow Sign in / Sign up Home About Insight Da... | During the seven-week Insight Data Engineering... | Graph-based machine learning | Live | 50 |
| 221 | * United States\r\n\r\nIBM® * Site map\r\n\r\n... | When used to make sense of huge amounts of con... | How smart catalogs can turn the big data flood... | Live | 221 |
| 692 | Homepage Follow Sign in / Sign up Homepage * H... | One of the earliest documented catalogs was co... | How smart catalogs can turn the big data flood... | Live | 221 |
| 232 | Homepage Follow Sign in Get started Homepage *... | If you are like most data scientists, you are ... | Self-service data preparation with IBM Data Re... | Live | 232 |
| 971 | Homepage Follow Sign in Get started * Home\r\n... | If you are like most data scientists, you are ... | Self-service data preparation with IBM Data Re... | Live | 232 |
| 399 | Homepage Follow Sign in Get started * Home\r\n... | Today’s world of data science leverages data f... | Using Apache Spark as a parallel processing fr... | Live | 398 |
| 761 | Homepage Follow Sign in Get started Homepage *... | Today’s world of data science leverages data f... | Using Apache Spark as a parallel processing fr... | Live | 398 |
| 578 | This video shows you how to construct queries ... | This video shows you how to construct queries ... | Use the Primary Index | Live | 577 |
| 970 | This video shows you how to construct queries ... | This video shows you how to construct queries ... | Use the Primary Index | Live | 577 |
# Remove any rows that have the same article_id - only keep the first
df_content.drop_duplicates(subset=['article_id'], keep='first', inplace=True)
3. Use the cells below to find:
a. The number of unique articles that have an interaction with a user.
b. The number of unique articles in the dataset (whether they have any interactions or not).
c. The number of unique users in the dataset. (excluding null values)
d. The number of user-article interactions in the dataset.
unique_articles = df.article_id.nunique() # The number of unique articles that have at least one interaction
total_articles = df_content.shape[0] # The number of unique articles on the IBM platform (duplicates are removed)
unique_users = df.email.nunique() # The number of unique users
user_article_interactions = df.shape[0] # The number of user-article interactions
print(f'The number of unique articles that have at least one interaction is {unique_articles}.')
print(f'The number of unique articles on the IBM platform is {total_articles}.')
print(f'The number of unique users is {unique_users}.')
print(f'The number of user-article interactions is {user_article_interactions}.')
The number of unique articles that have at least one interaction is 714.
The number of unique articles on the IBM platform is 1051.
The number of unique users is 5148.
The number of user-article interactions is 45993.
4. Use the cells below to find the most viewed article_id, as well as how often it was viewed. After talking to the company leaders, the email_mapper function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
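A quick check of the null values mentioned above (a minimal sketch; it must run before the email column is dropped further down):
# Count interactions with a missing email; email_mapper treats all of these
# NaN values as one and the same (anonymous) user.
print(df['email'].isnull().sum())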
df.groupby('article_id').count()['email'].sort_values(ascending=False).head(1).values[0]
937
most_viewed_article_id = str(df.article_id.value_counts().index[0]) # The most viewed article in the dataset as a string with one value following the decimal
max_views = df.groupby('article_id').count()['email'].sort_values(ascending=False).head(1).values[0] # The most viewed article in the dataset was viewed how many times?
## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
| | article_id | title | user_id |
|---|---|---|---|
| 0 | 1430.0 | using pixiedust for fast, flexible, and easier... | 1 |
| 1 | 1314.0 | healthcare python streaming application demo | 2 |
| 2 | 1429.0 | use deep learning for image classification | 3 |
| 3 | 1338.0 | ml optimization using cognitive assistant | 4 |
| 4 | 1276.0 | deploy your python model as a restful api | 5 |
df.dtypes
article_id    float64
title          object
user_id         int64
dtype: object
# make article_id a string
df['article_id'] = df['article_id'].astype(str)
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
'`50% of individuals have _____ or fewer interactions.`': median_val,
'`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
'`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
'`The most viewed article in the dataset was viewed _____ times.`': max_views,
'`The article_id of the most viewed article is ______.`': most_viewed_article_id,
'`The number of unique articles that have at least 1 rating ______.`': unique_articles,
'`The number of unique users in the dataset is ______`': unique_users,
'`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
It looks like you have everything right here! Nice job!
Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
1. Fill in the function below to return the top n articles, ordered with the most interactions at the top. Test your function using the tests below.
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
# count interactions per title and keep the n most frequent titles
top_articles = df.title.value_counts().head(n).index.to_list()
return top_articles # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article ids
'''
# get top n article ids from df
top_articles = df.article_id.value_counts().head(n).index.to_list()
return top_articles # Return the top article ids
print(get_top_articles(10))
print(get_top_article_ids(10))
['use deep learning for image classification', 'insights from new york car accident reports', 'visualize car data with brunel', 'use xgboost, scikit-learn & ibm watson machine learning apis', 'predicting churn with the spss random tree algorithm', 'healthcare python streaming application demo', 'finding optimal locations of new store using decision optimization', 'apache spark lab, part 1: basic concepts', 'analyze energy consumption in buildings', 'gosales transactions for logistic regression model']
['1429.0', '1330.0', '1431.0', '1427.0', '1364.0', '1314.0', '1293.0', '1170.0', '1162.0', '1304.0']
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
Your top_5 looks like the solution list! Nice job.
Your top_10 looks like the solution list! Nice job.
Your top_20 looks like the solution list! Nice job.
1. Use the function below to reformat the df dataframe to be shaped with users as the rows and articles as the columns.
Each user should only appear in each row once.
Each article should only show up in one column.
If a user has interacted with an article, then place a 1 where the user-row meets that article-column. It does not matter how many times a user has interacted with the article; all entries where a user has interacted with an article should be a 1.
If a user has not interacted with an item, then place a zero where the user-row meets that article-column.
Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
df_matrix = df.groupby(['user_id', 'article_id'])['title'].max().unstack()
user_item = df_matrix.notnull().astype('int')
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
You have passed our quick tests! Please proceed!
2. Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
Use the tests to test your function.
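To see why the dot product is a sensible similarity measure here, consider a small toy example (hypothetical vectors, not from the dataset): for binary interaction vectors, the dot product simply counts the articles both users have interacted with.
# toy example with made-up binary interaction vectors
user_a = np.array([1, 0, 1, 1, 0])
user_b = np.array([1, 1, 1, 0, 0])
print(user_a.dot(user_b))  # 2 -> the two users share two articles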
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every pair of users based on the dot product
Returns an ordered list of user_ids, from most to least similar
'''
# compute similarity of each user to the provided user
similarity_to_user_id = user_item.dot(user_item.loc[user_id])
# sort by similarity
similarity_to_user_id_sorted = similarity_to_user_id.sort_values(ascending=False)
# create list of just the ids
most_similar_users = similarity_to_user_id_sorted.index.tolist()
# remove the own user's id
most_similar_users.remove(user_id)
return most_similar_users # return a list of the users in order from most to least similar
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
The 10 most similar users to user 1 are: [3933, 23, 3782, 203, 4459, 3870, 131, 4201, 46, 5041]
The 5 most similar users to user 3933 are: [1, 23, 3782, 203, 4459]
The 3 most similar users to user 46 are: [4201, 3782, 23]
3. Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
# get article names from df using article ids
article_names = df[df['article_id'].isin(article_ids)]['title'].unique().tolist()
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the doc_full_name column in df_content)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
# get list of article ids seen by user
article_ids = user_item.loc[user_id][user_item.loc[user_id] == 1].index.tolist()
# get article names based on article ids
article_names = get_article_names(article_ids)
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
# find similar users
similar_users = find_similar_users(user_id)
# get articles seen by user
user_article_ids = get_user_articles(user_id)[0]
# set up recommendations
recs = []
# loop through similar users
for user in similar_users:
# get articles seen by similar user
similar_user_article_ids = get_user_articles(user)[0]
# get recommendations for user
current_recs = [article for article in similar_user_article_ids if article not in user_article_ids]
# add recommendations to list until m recommendations are found
for article in current_recs:
if len(recs) < m:
recs.append(article)
else:
break
return recs # return your recommendations for this user_id
# Check Results
get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1
['analyze energy consumption in buildings', 'analyze accident reports on amazon emr spark', '520 using notebooks with pixiedust for fast, flexi...\nName: title, dtype: object', '1448 i ranked every intro to data science course on...\nName: title, dtype: object', 'data tidying in data science experience', 'airbnb data for analytics: vancouver listings', 'recommender systems: approaches & algorithms', 'airbnb data for analytics: mallorca reviews', 'analyze facebook data using ibm watson and watson studio', 'a tensorflow regression model to predict house values']
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
If this is all you see, you passed all of our tests! Nice job!
4. Now we are going to improve the consistency of the user_user_recs function from above.
Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the top_articles function you wrote earlier. The tie-breaking idea is sketched just below.
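The tie-breaking can be expressed as a two-key sort, as in this small sketch with made-up neighbor data (not from the dataset):
# toy illustration: sort by similarity first, then break ties by total interactions, both descending
toy = pd.DataFrame({'neighbor_id': [10, 11, 12], 'similarity': [5, 5, 7], 'num_interactions': [3, 40, 12]})
print(toy.sort_values(by=['similarity', 'num_interactions'], ascending=False))
# neighbor 12 comes first (highest similarity); 11 comes before 10 (more interactions)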
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each user to the provided user_id
num_interactions - the total number of article interactions by the neighbor user
Other Details - sort the neighbors_df by the similarity and then by number of interactions where
highest of each is higher in the dataframe
'''
# get similarity to user_id df and rename columns
neighbors_df = user_item.dot(user_item.loc[user_id]).sort_values(ascending=False).to_frame().reset_index().rename(columns={0: 'similarity', 'user_id': 'neighbor_id'})
neighbors_df = neighbors_df[neighbors_df['neighbor_id'] != user_id]
# create a dictionary of user_id and number of interactions for all neighbor_id in test_df
user_interactions = df[df['user_id'].isin(neighbors_df['neighbor_id'])].groupby('user_id')['article_id'].count().sort_values(ascending=False).to_dict()
# create a column for number of interactions in test_df called 'num_interactions' based on user_interactions dictionary
neighbors_df['num_interactions'] = neighbors_df['neighbor_id'].map(user_interactions)
# sort by similarity and then by number of interactions
neighbors_df = neighbors_df.sort_values(by=['similarity', 'num_interactions'], ascending=False)
return neighbors_df # Return the dataframe specified in the doc_string
def user_user_recs_part2(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
* Choose the articles with the most total interactions
before choosing those with fewer total interactions.
'''
# find similar users
top_similar_users = get_top_sorted_users(user_id)['neighbor_id']
# get articles seen by user
user_article_ids = get_user_articles(user_id)[0]
# set up recommendations
recs = []
# loop through similar users
for user in top_similar_users:
# get articles seen by similar user
similar_user_article_ids = get_user_articles(user)[0]
# order articles by number of interactions
similar_user_article_ids = df[df['article_id'].isin(similar_user_article_ids)].groupby('article_id')['user_id'].count().sort_values(ascending=False).index.tolist()
# get recommendations for user
current_recs = [article for article in similar_user_article_ids if article not in user_article_ids]
# add recommendations to list until m recommendations are found
for article in current_recs:
if len(recs) < m:
recs.append(article)
else:
break
rec_names = get_article_names(recs)
return recs, rec_names
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
The top 10 recommendations for user 20 are the following article ids:
['1330.0', '1427.0', '1364.0', '1170.0', '1162.0', '1304.0', '1351.0', '1160.0', '1354.0', '1368.0']

The top 10 recommendations for user 20 are the following article names:
['apache spark lab, part 1: basic concepts', 'predicting churn with the spss random tree algorithm', 'analyze energy consumption in buildings', 'use xgboost, scikit-learn & ibm watson machine learning apis', 'putting a human face on machine learning', 'gosales transactions for logistic regression model', 'insights from new york car accident reports', 'model bike sharing data with spss', 'analyze accident reports on amazon emr spark', 'movie recommender system with spark machine learning']
5. Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each question following the comments below.
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).iloc[0]['neighbor_id'] # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).iloc[9]['neighbor_id'] # Find the 10th most similar user to user 131
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
This all looks good! Nice job!
6. If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
Because we have no knowledge about a new user (i.e., which articles they have viewed, and therefore how similar they are to other users), we cannot use our collaborative filtering function (user_user_recs). We can only use rank-based recommendations and suggest the most popular articles (get_top_articles). A better method for new users could be to collect some information up front, such as topics of interest during onboarding, and serve content-based recommendations until enough interaction data accumulates to switch to collaborative filtering.
7. Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
new_user_recs = get_top_article_ids(10) # Your recommendations here
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
That's right! Nice job!
In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
1. You should have already created a user_item matrix above in question 1 of Part III above. This first question here will just require that you run the cells to get things set up for the rest of Part V of the notebook.
Recall: the user-item matrix has user ids as rows and article ids as columns, with a 1 where a user interacted with an article and a 0 otherwise.
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')
# quick look at the matrix
user_item_matrix.head()
| article_id | 0.0 | 100.0 | ... | 996.0 | 997.0 |
|---|---|---|---|---|---|
| user_id | |||||
| 1 | 0.0 | 0.0 | ... | 0.0 | 0.0 |
| 2 | 0.0 | 0.0 | ... | 0.0 | 0.0 |
| 3 | 0.0 | 0.0 | ... | 0.0 | 0.0 |
| 4 | 0.0 | 0.0 | ... | 0.0 | 0.0 |
| 5 | 0.0 | 0.0 | ... | 0.0 | 0.0 |
5 rows × 714 columns
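The pickled matrix is assumed to correspond to the user_item matrix built in Part III; a minimal sketch of how such a file could have been written (an assumption, not part of the provided template):
# assumed provenance of the pickle file loaded above
user_item.to_pickle('user_item_matrix.p')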
2. In this situation, you can use Singular Value Decomposition from numpy on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix) # use the built in to get the three matrices
print(u.shape, s.shape, vt.shape)
(5149, 5149) (714,) (714, 714)
In this case we are only predicting whether a user would interact with an article, not how they would rate it. Unlike the matrices in the lesson, this user-item matrix has no missing values (every entry is a 0 or a 1), so traditional SVD from numpy can be applied directly; there is no need for the FunkSVD-style approach used in the lesson to handle missing values.
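As a quick sanity check (a minimal sketch), the full-rank decomposition should reconstruct the 0/1 matrix exactly, up to floating point error:
# verify that U * S * V^T reproduces the original matrix when all 714 latent features are kept
reconstruction = u[:, :len(s)] @ np.diag(s) @ vt
print(np.allclose(user_item_matrix, reconstruction))  # expected: True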
3. Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
4. From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of whether we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
# create user-item matrices for train and test sets
user_item_train = create_user_item_matrix(df_train)
user_item_test = create_user_item_matrix(df_test)
# get test user ids and article ids
test_idx = user_item_test.index.tolist()
test_arts = user_item_test.columns.tolist()
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
# how many user_ids are in both sets?
a = len(set(user_item_train.index) & set(test_idx))
# how many users are in the test set are we not able to make predictions for because of the cold start problem?
b = len(set(test_idx) - set(user_item_train.index))
# how many articles can we make predictions for in the test set?
c = len(set(test_arts) & set(user_item_train.columns))
# how many articles in the test set are we not able to make predictions for because of the cold start problem?
d = len(set(test_arts) - set(user_item_train.columns))
print(a, b, c, d)
20 662 574 0
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?': c # letter here,
,'How many users in the test set are we not able to make predictions for because of the cold start problem?': a # letter here,
,'How many articles can we make predictions for in the test set?': b # letter here,
,'How many articles in the test set are we not able to make predictions for because of the cold start problem?': d # letter here
}
t.sol_4_test(sol_4_dict)
Awesome job! That's right! All of the test articles are in the training data, but there are only 20 test users that were also in the training set. All of the other users that are in the test set we have no data on. Therefore, we cannot make predictions for these users using SVD.
5. Now use the user_item_train dataset from above to find U, S, and V transpose using SVD.
Then find the subset of rows in the user_item_test dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data.
This will require combining what was done in questions 2 - 4.
Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below
# Use these cells to see how well you can use the training
# decomposition to predict on test data
# get subset of users and articles that are in both training and test sets
train_idx = user_item_train.index
train_arts = user_item_train.columns
# sort the common ids so the row/column order matches the index positions found below
common_idx = sorted(set(train_idx) & set(test_idx))
common_arts = sorted(set(train_arts) & set(test_arts))
# find the position of each common user and article in the training matrix
common_idx_train = np.where(np.in1d(train_idx, common_idx))[0]
common_arts_train = np.where(np.in1d(train_arts, common_arts))[0]
# get subset of user_item_test matrix that only contains users and articles that are in both training and test sets
user_item_test_subset = user_item_test.loc[common_idx, common_arts]
# initialize testing parameters
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
# loop through latent features
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s_train[:k]), u_train[common_idx_train, :k], vt_train[:k, common_arts_train]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_test_subset, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
# plot accuracy vs. number of latent features
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
6. Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
Accuracy decreases as the number of latent features increases: with more features the decomposition fits the training interactions more closely but generalizes worse to the held-out test users. Accuracy is also a poor metric for a dataset as imbalanced as this one (the vast majority of entries are 0, as the check below shows), so a metric such as the F1 score would be more informative.
To determine whether any of the above recommenders are an improvement over how users currently find articles, we could run an online experiment such as an A/B test: show recommendations to one group of users, keep a control group with the current experience, and compare engagement (for example, article interactions per user). Content-based recommendations could also help cover new users and new articles, which collaborative filtering and SVD cannot handle.
# is the data in user_item_train imbalanced?
# count total number of 1s and 0s in user_item_train
num_ones = np.sum(np.sum(user_item_train == 1))
num_zeros = np.sum(np.sum(user_item_train == 0))
print('Number of 1s in user_item_train: {}'.format(num_ones))
print('Number of 0s in user_item_train: {}'.format(num_zeros))
Number of 1s in user_item_train: 29264
Number of 0s in user_item_train: 3174454
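As a sketch of the F1 idea above (assuming scikit-learn is available in the environment), the test-subset predictions for one choice of latent features could be scored like this:
from sklearn.metrics import f1_score
# minimal sketch: score the rounded SVD reconstruction of the test subset
# with F1 instead of accuracy (k chosen arbitrarily for illustration)
k = 50
u_new, s_new, vt_new = u_train[common_idx_train, :k], np.diag(s_train[:k]), vt_train[:k, common_arts_train]
preds = np.around(u_new @ s_new @ vt_new).clip(0, 1)
print(f1_score(user_item_test_subset.values.flatten(), preds.flatten()))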
Using your notebook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a Flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capable of taking these tasks on to improve upon your work here!
Congratulations! You have reached the end of the Recommendations with IBM project!
Tip: Once you are satisfied with your work here, check over your report to make sure that it satisfies all the areas of the rubric. You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible.
Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).
Alternatively, you can download this report as .html via the File > Download as submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.
Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb','--to', 'html'])
[NbConvertApp] WARNING | pattern '--to html' matched no files
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/henriettewevell/wsl-repos/udacity_recommendation_engine/venv/lib/python3.10/site-packages/nbconvert/__main__.py", line 4, in <module>
main()
File "/home/henriettewevell/wsl-repos/udacity_recommendation_engine/venv/lib/python3.10/site-packages/jupyter_core/application.py", line 277, in launch_instance
return super().launch_instance(argv=argv, **kwargs)
File "/home/henriettewevell/wsl-repos/udacity_recommendation_engine/venv/lib/python3.10/site-packages/traitlets/config/application.py", line 1043, in launch_instance
app.start()
File "/home/henriettewevell/wsl-repos/udacity_recommendation_engine/venv/lib/python3.10/site-packages/nbconvert/nbconvertapp.py", line 414, in start
self.convert_notebooks()
File "/home/henriettewevell/wsl-repos/udacity_recommendation_engine/venv/lib/python3.10/site-packages/nbconvert/nbconvertapp.py", line 576, in convert_notebooks
raise ValueError(msg)
ValueError: Please specify an output format with '--to <format>'.
The following formats are available: ['asciidoc', 'custom', 'html', 'latex', 'markdown', 'notebook', 'pdf', 'python', 'qtpdf', 'qtpng', 'rst', 'script', 'slides', 'webpdf']
1
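If nbconvert reports that no output format was specified, as in the traceback above, one workaround to try (an assumption about the environment, not part of the template) is to call the jupyter entry point with the --to flag placed before the notebook filename:
from subprocess import call
# alternative invocation; assumes the `jupyter` command is available on PATH
call(['jupyter', 'nbconvert', '--to', 'html', 'Recommendations_with_IBM.ipynb'])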